Sacrificial Learning in Nonlinear Perceptrons

Author

  • Peixun Luo
Abstract

Using the cavity method, we consider the learning of noisy teacher-generated examples by a nonlinear student perceptron. When the examples are insufficient and the weight decay is weak, the activation distribution of the training examples exhibits a gap at the more difficult examples, illustrating that outliers are sacrificed for the sake of overall performance. Simulations show that the picture of a smooth energy landscape cannot describe the gapped distributions well, implying that a rough energy landscape may complicate the learning process.
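The setting described in the abstract can be sketched as follows. The dimensions, noise level, weight-decay strength, and tanh transfer function below are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and hyperparameters (assumed, not from the paper).
N, P = 50, 100            # input dimension, number of training examples
noise = 0.3               # strength of the teacher output noise
lam = 0.01                # weak weight decay
lr = 0.05                 # learning rate

# Teacher generates noisy examples; student is a nonlinear (tanh) perceptron.
teacher = rng.standard_normal(N) / np.sqrt(N)
X = rng.standard_normal((P, N))
y = np.tanh(X @ teacher) + noise * rng.standard_normal(P)

w = rng.standard_normal(N) / np.sqrt(N)
for _ in range(2000):
    act = X @ w                                   # activations of the examples
    err = np.tanh(act) - y
    grad = X.T @ (err * (1.0 - np.tanh(act) ** 2)) / P
    w -= lr * (grad + lam * w)                    # gradient step + weight decay

act = X @ w
# Inspecting a histogram of `act` is how one would look for the gap in the
# activation distribution that the abstract describes.
```

Note that this plain gradient-descent sketch assumes the smooth-landscape picture; the abstract's point is precisely that this picture breaks down for the gapped distributions.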


Similar articles

Multi-Layer Perceptrons as Nonlinear Generative Models for Unsupervised Learning: a Bayesian Treatment

In this paper, multi-layer perceptrons are used as nonlinear generative models. The problem of indeterminacy of the models is resolved using a recently developed Bayesian method called ensemble learning. Using a Bayesian approach, models can be compared according to their probabilities. In simulations with artificial data, the network is able to find the underlying causes of the observations despi...


An Ensemble Learning Approach to Nonlinear Independent Component Analysis

Blind extraction of independent sources from their nonlinear mixtures is generally a very difficult problem. This is because both the nonlinear mapping and the underlying sources are unknown, and must be learned in an unsupervised manner from the data. We use multilayer perceptrons as nonlinear generative models for the data, and apply Bayesian ensemble learning for optimizing the model. In thi...


Nonlinear Classification using Ensemble of Linear Perceptrons

In this study we introduce a neural network ensemble composed of several linear perceptrons, to be used as a classifier that can rapidly be trained and effectively deals with nonlinear problems. Although each member of the ensemble can only deal with linear classification problems, through a competitive training mechanism, the ensemble is able to automatically allocate a part of the learning sp...
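A minimal sketch of such a competitive ensemble, assuming a winner-take-all rule in which each example trains only the member whose output is currently closest to the target (the paper's exact mechanism may differ), applied to a toy XOR-like nonlinear problem:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy XOR-like nonlinear problem (illustrative data, not the paper's).
X = rng.uniform(-1.0, 1.0, (200, 2))
y = np.sign(X[:, 0] * X[:, 1])

K, lr = 4, 0.1                        # ensemble size and learning rate (assumed)
W = 0.1 * rng.standard_normal((K, 2))
b = np.zeros(K)

for _ in range(50):
    for x, t in zip(X, y):
        out = W @ x + b
        k = int(np.argmin(np.abs(out - t)))   # competition: closest member wins
        err = t - out[k]
        W[k] += lr * err * x                  # only the winner is updated
        b[k] += lr * err

# Predict with the most confident member for each input.
outs = W @ X.T + b[:, None]                   # shape (K, examples)
winners = np.argmax(np.abs(outs), axis=0)
pred = np.sign(outs[winners, np.arange(len(X))])
```

The competitive update lets each linear member specialize on a region of input space, which is how an ensemble of purely linear units can carve up a nonlinear decision boundary.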


Multi-Layer Perceptrons with B-Spline Receptive Field Functions

Multi-layer perceptrons are often slow to learn nonlinear functions with complex local structure due to the global nature of their function approximations. It is shown that standard multi-layer perceptrons are actually a special case of a more general network formulation that incorporates B-splines into the node computations. This allows novel spline network architectures to be developed that c...


Hyperparameter Optimization with Factorized Multilayer Perceptrons

In machine learning, hyperparameter optimization is a challenging task that is usually approached by experienced practitioners or in a computationally expensive brute-force manner such as grid-search. Therefore, recent research proposes to use observed hyperparameter performance on already solved problems (i.e. data sets) in order to speed up the search for promising hyperparameter configuratio...
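The grid-search baseline that the abstract calls computationally expensive amounts to exhaustively scoring every hyperparameter combination; `val_error` below is a hypothetical stand-in for an actual training-and-validation run:

```python
from itertools import product

def val_error(lr, reg):
    # Hypothetical validation error; in practice this would train
    # and evaluate a model with the given hyperparameters.
    return (lr - 0.1) ** 2 + (reg - 0.01) ** 2

learning_rates = [0.01, 0.1, 1.0]
regularizations = [0.001, 0.01, 0.1]

best = min(product(learning_rates, regularizations),
           key=lambda cfg: val_error(*cfg))
print(best)  # → (0.1, 0.01)
```

The cost grows multiplicatively with each added hyperparameter, which is the motivation for reusing observed performance from already-solved data sets instead.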




Publication date: 2000